Gradient methods for minimizing composite objective function
In this paper we analyze several new methods for solving optimization problems whose objective function is formed as a sum of two convex terms: one is smooth and given by a black-box oracle, and the other is general but simple and of known structure. Despite the bad properties of the sum, such problems, both in convex and nonconvex cases, can be solved with efficiency typical for the good part of the objective. For convex problems of the above structure, we consider primal and dual variants of the gradient method (which converge as O(1/k)), and an accelerated multistep version with convergence rate O(1/k^2), where k is the iteration counter. For all methods, we suggest efficient "line search" procedures and show that the additional computational work necessary for estimating the unknown problem class parameters can only multiply the complexity of each iteration by a small constant factor. We also present the results of preliminary computational experiments, which confirm the superiority of the accelerated scheme.
Keywords: local optimization, convex optimization, nonsmooth optimization, complexity theory, black-box model, optimal methods, structural optimization, l1-regularization
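The composite setting described above (smooth term via a black-box gradient oracle, plus a simple known term such as an l1 penalty) admits a proximal gradient method with backtracking line search on the unknown Lipschitz constant. The sketch below is a minimal illustration of that idea for f(x) = ½‖Ax − b‖² + λ‖x‖₁, not a reproduction of the paper's exact schemes; all names and parameter choices are the author's own for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    """Prox operator of t * ||.||_1 (the 'simple' part of the objective)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient(A, b, lam, x0, max_iter=200, L0=1.0, eta=2.0):
    """Proximal gradient for 0.5*||Ax-b||^2 + lam*||x||_1 with a backtracking
    'line search' that estimates the smooth part's Lipschitz constant L."""
    x, L = x0.copy(), L0
    f = lambda z: 0.5 * np.sum((A @ z - b) ** 2)   # smooth part only
    for _ in range(max_iter):
        g = A.T @ (A @ x - b)                      # gradient of the smooth part
        while True:
            x_new = soft_threshold(x - g / L, lam / L)   # prox-gradient step
            d = x_new - x
            # accept the step once the quadratic upper model is valid
            if f(x_new) <= f(x) + g @ d + 0.5 * L * (d @ d):
                break
            L *= eta                               # backtrack: grow the L estimate
        x = x_new
    return x
```

Each failed backtracking test only doubles the current estimate of L, which matches the abstract's point that line search multiplies per-iteration cost by a small constant factor.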
Local quadratic convergence of polynomial-time interior-point methods for conic optimization problems
In this paper, we establish local quadratic convergence of polynomial-time interior-point methods for general conic optimization problems. The main structural property used in our analysis is the logarithmic homogeneity of self-concordant barrier functions. We propose new path-following predictor-corrector schemes which work only in the dual space. They are based on an easily computable gradient proximity measure, which ensures an automatic transformation of the global linear rate of convergence to the local quadratic one under some mild assumptions. Our step-size procedure for the predictor step is related to the maximum step size (the one that takes us to the boundary). It appears that in order to obtain local superlinear convergence, we need to tighten the neighborhood of the central path proportionally to the current duality gap.
Keywords: conic optimization problem, worst-case complexity analysis, self-concordant barriers, polynomial-time methods, predictor-corrector methods, local quadratic convergence
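Local quadratic convergence means the error is roughly squared at every step once the iterates are close to the solution. A toy Newton iteration on the barrier-type function f(x) = x − ln x (minimized at x* = 1) makes this concrete; it is an illustration of the convergence notion only, not of the paper's predictor-corrector schemes.

```python
def newton_quadratic_demo(x0=0.5, steps=5):
    """Newton's method on f(x) = x - ln(x), minimizer x* = 1.
    f'(x) = 1 - 1/x and f''(x) = 1/x^2, so the Newton update is
    x_next = x - f'(x)/f''(x) = 2*x - x**2, and the error satisfies
    (1 - x_next) = (1 - x)**2: exactly quadratic convergence.
    Toy example only, unrelated to the paper's actual method."""
    errs = []
    x = x0
    for _ in range(steps):
        x = 2 * x - x * x
        errs.append(1.0 - x)        # record the error |x - x*|
    return errs
```

Starting from x0 = 0.5 the errors go 0.25, 0.0625, 0.0039, …, each the square of the previous one.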
A Subgradient Method for Free Material Design
A small improvement in the structure of a material can save the manufacturer a lot of money. Free material design can be formulated as an optimization problem. However, due to its large scale, second-order methods cannot solve the free material design problem at reasonable sizes. We formulate the free material optimization (FMO) problem in a saddle-point form in which the inverse of the stiffness matrix A(E) in the constraint is eliminated. The size of A(E) is generally large, denoted as N by N. This is the first formulation of FMO without A(E). We apply the primal-dual subgradient method [17] to solve the restricted saddle-point formulation. This is the first gradient-type method for FMO. Each iteration of our algorithm takes a total of floating-point operations and an auxiliary vector storage of size O(N), compared with formulations involving the inverse of A(E), which require arithmetic operations and an auxiliary vector storage of size . To solve the problem, we develop a closed-form solution to a semidefinite least-squares problem and an efficient parameter update scheme for the gradient method, which are included in the appendix. We also approximate a solution to the bounded Lagrangian dual problem. The problem is decomposed into small problems, each having only an unknown k-by-k (k = 3 or 6) matrix, and can be solved in parallel. The iteration bound of our algorithm is optimal for general subgradient schemes. Finally, we present promising numerical results.
Comment: SIAM Journal on Optimization (accepted)
Confidence level solutions for stochastic programming
We propose an alternative approach to stochastic programming based on Monte-Carlo sampling and stochastic gradient optimization. The procedure is by essence probabilistic and the computed solution is a random variable. The associated objective value is doubly random, since it depends on two outcomes: the event in the stochastic program and the randomized algorithm. We propose a solution concept in which the probability that the randomized algorithm produces a solution with an expected objective value departing from the optimal one by more than ε is small enough. We derive complexity bounds for this process. We show that by repeating the basic process on independent samples, one can significantly sharpen the complexity bounds.
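The repetition idea in the last sentence can be sketched as follows: run the randomized stochastic gradient process several times on independent samples, then keep the run whose objective estimate on a fresh validation sample is best, which boosts the confidence level of the returned solution. This is a minimal illustration on a toy one-dimensional problem, not the paper's procedure; the objective, step sizes, and sample sizes are all assumptions made for the example.

```python
import numpy as np

def sgd_run(rng, x0, iters=500):
    """One randomized run: stochastic gradient descent on
    E_w[(x - w)^2] with w ~ N(1, 1), whose optimum is x* = E[w] = 1.
    Step h_k = 1/(2(k+1)) applied to the stochastic gradient 2(x - w)."""
    x = x0
    for k in range(iters):
        w = rng.normal(1.0, 1.0)          # one Monte-Carlo sample
        x -= (x - w) / (k + 1)            # stochastic gradient step
    return x

def confidence_solution(n_runs=9, n_val=2000, seed=0):
    """Repeat the basic process on independent samples and return the
    candidate whose validation estimate of the objective is smallest."""
    rng = np.random.default_rng(seed)
    candidates = [sgd_run(rng, x0=5.0) for _ in range(n_runs)]
    w_val = rng.normal(1.0, 1.0, size=n_val)   # fresh validation sample
    scores = [np.mean((x - w_val) ** 2) for x in candidates]
    return candidates[int(np.argmin(scores))]
```

Each individual run returns a random solution; selecting the best of several independent runs drives the probability of a poor outcome down geometrically in the number of repetitions.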